Results 1 - 20 of 1,555
1.
IEEE Trans Image Process ; 33: 2502-2513, 2024.
Article in English | MEDLINE | ID: mdl-38526904

ABSTRACT

Residual coding has gained prevalence in lossless compression, where a lossy layer is initially employed and the reconstruction errors (i.e., residues) are then losslessly compressed. The underlying principle of the residual coding revolves around the exploration of priors based on context modeling. Herein, we propose a residual coding framework for 3D medical images, involving the off-the-shelf video codec as the lossy layer and a Bilateral Context Modeling based Network (BCM-Net) as the residual layer. The BCM-Net is proposed to achieve efficient lossless compression of residues through exploring intra-slice and inter-slice bilateral contexts. In particular, a symmetry-based intra-slice context extraction (SICE) module is proposed to mine bilateral intra-slice correlations rooted in the inherent anatomical symmetry of 3D medical images. Moreover, a bi-directional inter-slice context extraction (BICE) module is designed to explore bilateral inter-slice correlations from bi-directional references, thereby yielding representative inter-slice context. Experiments on popular 3D medical image datasets demonstrate that the proposed method can outperform existing state-of-the-art methods owing to efficient redundancy reduction. Our code will be available on GitHub for future research.
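As a rough illustration of the residual-coding principle described above (not of BCM-Net itself), the sketch below simulates the lossy layer with coarse quantization and compresses the residues losslessly; the volume shape, quantization step, and use of zlib are illustrative assumptions.

```python
import numpy as np
import zlib

def residual_code(volume: np.ndarray, step: int = 8):
    """Toy residual coding: coarse quantization stands in for the lossy
    video-codec layer; the residues are then compressed losslessly."""
    lossy = (volume // step) * step                       # stand-in for the lossy reconstruction
    residue = volume.astype(np.int16) - lossy.astype(np.int16)
    payload = zlib.compress(residue.tobytes(), 9)         # stand-in for BCM-Net's entropy coding
    return lossy, payload, residue.dtype, residue.shape

def residual_decode(lossy, payload, dtype, shape):
    residue = np.frombuffer(zlib.decompress(payload), dtype=dtype).reshape(shape)
    return (lossy.astype(np.int16) + residue).astype(np.uint8)   # exact (lossless) reconstruction

vol = np.random.randint(0, 256, size=(16, 64, 64), dtype=np.uint8)  # fake 3D volume
lossy, payload, dt, shp = residual_code(vol)
assert np.array_equal(residual_decode(lossy, payload, dt, shp), vol)
```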


Subjects
Data Compression, Data Compression/methods, Three-Dimensional Imaging/methods
2.
Sci Rep ; 14(1): 5168, 2024 03 02.
Article in English | MEDLINE | ID: mdl-38431641

ABSTRACT

Magnetic resonance imaging is a medical imaging technique used to create comprehensive images of the tissues and organs in the body. This study presents an advanced approach for storing and compressing neuroimaging informatics technology initiative files, a standard format in magnetic resonance imaging. It is designed to enhance telemedicine services by facilitating efficient and high-quality communication between healthcare practitioners and patients. The proposed downsampling approach begins by opening the neuroimaging informatics technology initiative file as volumetric data and then partitioning it into several slice images. The quantization hiding technique is then applied to each pair of consecutive slice images to generate a stego slice of the same size. This involves the following major steps: normalization, microblock generation, and discrete cosine transformation. Finally, the resultant stego slice images are assembled to produce the final neuroimaging informatics technology initiative file as volumetric data. The upsampling process, designed to be completely blind, reverses the downsampling steps to reconstruct the subsequent image slice accurately. The efficacy of the proposed method was evaluated using a magnetic resonance imaging dataset, focusing on peak signal-to-noise ratio, signal-to-noise ratio, structural similarity index, and entropy as key performance metrics. The results demonstrate that the proposed approach not only significantly reduces file sizes but also maintains high image quality.
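A minimal sketch of the per-slice steps named above (normalization, microblock generation, discrete cosine transformation), assuming 8x8 microblocks and SciPy's DCT; the quantization-hiding and reassembly stages are omitted.

```python
import numpy as np
from scipy.fft import dctn

def microblock_dct(slice_img: np.ndarray, block: int = 8):
    """Normalize a slice to [0, 1], cut it into block x block microblocks,
    and return the 2D DCT of every block (illustrative block size)."""
    img = slice_img.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)   # normalization
    h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
    img = img[:h, :w]
    blocks = img.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    return dctn(blocks, axes=(-2, -1), norm="ortho")            # DCT per microblock

coeffs = microblock_dct(np.random.rand(64, 64) * 255)
print(coeffs.shape)   # (8, 8, 8, 8): grid of 8x8 blocks, each with 8x8 coefficients
```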


Subjects
Data Compression, Telemedicine, Humans, Data Compression/methods, Magnetic Resonance Imaging/methods, Neuroimaging, Signal-to-Noise Ratio
3.
Sci Rep ; 14(1): 5087, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38429300

ABSTRACT

When EEG signals are collected at the rate required by the Nyquist theorem, long-term recordings produce a large amount of data. At the same time, limited bandwidth, end-to-end delay, and memory space put great pressure on the effective transmission of these data. The birth of compressed sensing alleviates this transmission pressure. However, using an iterative compressed sensing reconstruction algorithm for EEG signal reconstruction faces complex calculation problems and slow data processing speed, limiting the application of compressed sensing in EEG signal rapid monitoring systems. As such, this paper presents a non-iterative and fast algorithm for reconstructing EEG signals using compressed sensing and deep learning techniques. This algorithm uses an improved residual network model, extracts the feature information of the EEG signal by one-dimensional dilated convolution, directly learns the nonlinear mapping relationship between the measured values and the original signal, and can quickly and accurately reconstruct the EEG signal. The method proposed in this paper has been verified by simulation on the open BCI contest dataset. Overall, the proposed method is shown to have higher reconstruction accuracy and faster reconstruction speed than the traditional CS reconstruction algorithm and the existing deep learning reconstruction algorithm. In addition, it can realize the rapid reconstruction of EEG signals.
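The sketch below illustrates the general idea of a non-iterative, learned mapping from compressed measurements back to an EEG segment using one-dimensional dilated convolutions with residual connections; the layer sizes, dilation rates, and measurement dimensions are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DilatedResBlock(nn.Module):
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        pad = dilation  # keeps length constant for kernel_size=3
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, 3, padding=pad, dilation=dilation),
            nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=pad, dilation=dilation),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))    # residual connection

class CSRecon(nn.Module):
    """Non-iterative reconstruction: linear lift from M measurements to N samples,
    then dilated-convolution residual refinement (sizes are illustrative)."""
    def __init__(self, m: int = 128, n: int = 512, channels: int = 32):
        super().__init__()
        self.lift = nn.Linear(m, n)
        self.head = nn.Conv1d(1, channels, 3, padding=1)
        self.blocks = nn.Sequential(*[DilatedResBlock(channels, d) for d in (1, 2, 4, 8)])
        self.tail = nn.Conv1d(channels, 1, 3, padding=1)
    def forward(self, y):                       # y: (batch, M) compressed measurements
        x = self.lift(y).unsqueeze(1)           # (batch, 1, N) coarse estimate
        return self.tail(self.blocks(self.head(x))).squeeze(1)

model = CSRecon()
recon = model(torch.randn(4, 128))              # -> (4, 512) reconstructed EEG segments
```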


Subjects
Data Compression, Deep Learning, Computer-Assisted Signal Processing, Data Compression/methods, Algorithms, Electroencephalography/methods
4.
BMC Genomics ; 25(1): 266, 2024 Mar 09.
Article in English | MEDLINE | ID: mdl-38461245

ABSTRACT

BACKGROUND: DNA storage has the advantages of large capacity, long-term stability, and low power consumption relative to other storage mediums, making it a promising new storage medium for multimedia information such as images. However, DNA storage has a low coding density and weak error correction ability. RESULTS: To achieve more efficient DNA storage image reconstruction, we propose DNA-QLC (QRes-VAE and Levenshtein code (LC)), which uses the quantized ResNet VAE (QRes-VAE) model and LC for image compression and DNA sequence error correction, thus improving both the coding density and error correction ability. Experimental results show that the DNA-QLC encoding method can not only obtain DNA sequences that meet the combinatorial constraints, but also have a net information density that is 2.4 times higher than DNA Fountain. Furthermore, at a higher error rate (2%), DNA-QLC achieved image reconstruction with an SSIM value of 0.917. CONCLUSIONS: The results indicate that the DNA-QLC encoding scheme guarantees the efficiency and reliability of the DNA storage system and improves the application potential of DNA storage for multimedia information such as images.
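A toy illustration of two ingredients mentioned above: mapping bits onto DNA bases and screening candidate sequences against typical combinatorial constraints (GC balance, homopolymer runs). The QRes-VAE compression and Levenshtein-code error correction are not shown, and the constraint thresholds are assumptions.

```python
BASES = "ACGT"  # 2 bits per base

def bits_to_dna(bits: str) -> str:
    assert len(bits) % 2 == 0
    return "".join(BASES[int(bits[i:i+2], 2)] for i in range(0, len(bits), 2))

def satisfies_constraints(seq: str, gc=(0.4, 0.6), max_run=3) -> bool:
    """Typical combinatorial constraints: balanced GC content, no long homopolymers."""
    gc_frac = (seq.count("G") + seq.count("C")) / len(seq)
    if not (gc[0] <= gc_frac <= gc[1]):
        return False
    run, prev = 1, seq[0]
    for b in seq[1:]:
        run = run + 1 if b == prev else 1
        if run > max_run:
            return False
        prev = b
    return True

seq = bits_to_dna("0110110001001110")
print(seq, satisfies_constraints(seq))   # CGTACATG True
```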


Subjects
Algorithms, Data Compression, Reproducibility of Results, DNA/genetics, Data Compression/methods, Computer-Assisted Image Processing/methods
5.
Bioinformatics ; 40(3)2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38377404

ABSTRACT

MOTIVATION: Seeding is a rate-limiting stage in sequence alignment for next-generation sequencing reads. The existing optimization algorithms typically utilize hardware and machine-learning techniques to accelerate seeding. However, an efficient solution provided by professional next-generation sequencing compressors has been largely overlooked thus far. In addition to achieving remarkable compression ratios by reordering reads, these compressors provide valuable insights for downstream alignment, revealing that repetitive computations account for more than 50% of the seeding procedure in the commonly used short-read aligner BWA-MEM at typical sequencing coverage. Nevertheless, this exploited redundancy information is not fully realized or utilized. RESULTS: In this study, we present a compressive seeding algorithm, named CompSeed, to fill the gap. CompSeed, in collaboration with the existing reordering-based compression tools, finishes the BWA-MEM seeding process in about half the time by caching all intermediate seeding results in compact trie structures to directly answer repetitive inquiries that frequently cause random memory accesses. Furthermore, CompSeed demonstrates better performance as sequencing coverage increases, as it focuses solely on the small informative portion of sequencing reads after compression. The innovative strategy highlights the promising potential of integrating sequence compression and alignment to tackle the ever-growing volume of sequencing data. AVAILABILITY AND IMPLEMENTATION: CompSeed is available at https://github.com/i-xiaohu/CompSeed.
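A minimal memoization sketch of the idea of answering repetitive seeding inquiries from a cache; a plain `lru_cache` stands in for CompSeed's compact tries, and `find_seeds` is a hypothetical placeholder for the aligner's FM-index lookup.

```python
from functools import lru_cache

def find_seeds(kmer: str):
    """Placeholder for the expensive FM-index / suffix-array seeding query
    performed by an aligner such as BWA-MEM (hypothetical stand-in)."""
    return [hash(kmer) % 1000]               # fake list of reference hit positions

@lru_cache(maxsize=None)                      # cache stands in for CompSeed's compact tries
def cached_seeds(kmer: str):
    return tuple(find_seeds(kmer))

def seed_read(read: str, k: int = 19):
    """Seed every k-mer of a read; reordered (compressed) reads place similar
    reads next to each other, so most lookups hit the cache."""
    return [cached_seeds(read[i:i+k]) for i in range(len(read) - k + 1)]

reads = ["ACGTACGTACGTACGTACGTACG", "ACGTACGTACGTACGTACGTACG"]  # near-duplicates
for r in reads:
    seed_read(r)
print(cached_seeds.cache_info())              # second read is answered from the cache
```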


Subjects
Data Compression, Software, DNA Sequence Analysis/methods, Algorithms, Data Compression/methods, Computers, High-Throughput Nucleotide Sequencing/methods
6.
IUCrJ ; 11(Pt 2): 190-201, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38327201

ABSTRACT

Serial crystallography (SX) has become an established technique for protein structure determination, especially when dealing with small or radiation-sensitive crystals and investigating fast or irreversible protein dynamics. The advent of newly developed multi-megapixel X-ray area detectors, capable of capturing over 1000 images per second, has brought about substantial benefits. However, this advancement also entails a notable increase in the volume of collected data. Today, up to 2 PB of data per experiment could be easily obtained under efficient operating conditions. The combined costs associated with storing data from multiple experiments provide a compelling incentive to develop strategies that effectively reduce the amount of data stored on disk while maintaining the quality of scientific outcomes. Lossless data-compression methods are designed to preserve the information content of the data but often struggle to achieve a high compression ratio when applied to experimental data that contain noise. Conversely, lossy compression methods offer the potential to greatly reduce the data volume. Nonetheless, it is vital to thoroughly assess the impact on data quality and scientific outcomes when employing lossy compression, as it inherently involves discarding information. The evaluation of lossy compression effects on data requires proper data quality metrics. In our research, we assess various approaches for both lossless and lossy compression techniques applied to SX data, and equally importantly, we describe metrics suitable for evaluating SX data quality.
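A small numerical illustration of the point above, using an assumed synthetic Poisson "detector" frame: lossless coding of noisy counts gives a modest ratio, while quantizing before the same lossless coder (i.e., a lossy step) raises the ratio at the price of discarded information.

```python
import numpy as np, zlib

rng = np.random.default_rng(0)
frame = rng.poisson(lam=5.0, size=(512, 512)).astype(np.uint16)  # noisy synthetic frame

raw = frame.tobytes()
lossless = zlib.compress(raw, 9)

step = 4                                             # lossy: quantize counts before coding
quantized = (frame // step).astype(np.uint16)
lossy = zlib.compress(quantized.tobytes(), 9)

print("lossless ratio:", len(raw) / len(lossless))
print("lossy (quantized) ratio:", len(raw) / len(lossy))
```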


Subjects
Algorithms, Data Compression, Crystallography, Data Compression/methods, X-Ray Computed Tomography
7.
Gene ; 907: 148235, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38342250

ABSTRACT

Next Generation Sequencing (NGS) technology generates massive amounts of genome sequence data that increase rapidly over time. As a result, there is a growing need for efficient compression algorithms to facilitate the processing, storage, transmission, and analysis of large-scale genome sequences. Over the past 31 years, numerous state-of-the-art compression algorithms have been developed. The performance of any compression algorithm is measured by three main compression metrics: compression ratio, time, and memory usage. Existing k-mer hash indexing systems take more time, due to the decision-making process based on compression results. In this paper, we propose a two-phase reference genome compression algorithm using optimal k-mer length (RGCOK). Reference-based compression takes advantage of the inter-similarity between chromosomes of the same species. RGCOK achieves this by finding the optimal k-mer length for matching, using a randomization method and hashing. The performance of RGCOK was evaluated on three different benchmark data sets: novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), Homo sapiens, and other species sequences using an Amazon AWS virtual cloud machine. Experiments showed that the optimal k-mer finding time by RGCOK is around 45.28 min, whereas the time for existing state-of-the-art algorithms HiRGC, SCCG, and HRCM ranges from 58 min to 8.97 h.
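A minimal sketch of the reference-based matching idea: index the reference's k-mers in a hash table, then anchor and extend matches in the target. The fixed k and toy sequences are illustrative; RGCOK instead searches for an optimal k-mer length.

```python
from collections import defaultdict

def build_kmer_index(ref: str, k: int):
    index = defaultdict(list)
    for i in range(len(ref) - k + 1):
        index[ref[i:i+k]].append(i)
    return index

def greedy_matches(target: str, ref: str, k: int = 11):
    """Encode the target as (ref_pos, length) copies plus literal characters."""
    index, out, i = build_kmer_index(ref, k), [], 0
    while i < len(target):
        hits = index.get(target[i:i+k], [])
        if hits:
            j, length = hits[0], k
            while i + length < len(target) and j + length < len(ref) \
                    and target[i + length] == ref[j + length]:
                length += 1                   # extend the anchored match
            out.append(("copy", j, length)); i += length
        else:
            out.append(("literal", target[i])); i += 1
    return out

ref = "ACGTACGTGGATCCACGTTGACCT" * 4
tgt = ref[3:40] + "T" + ref[40:70]
print(greedy_matches(tgt, ref)[:5])
```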


Subjects
Data Compression, Software, Humans, Data Compression/methods, Algorithms, Genome, High-Throughput Nucleotide Sequencing/methods, DNA Sequence Analysis/methods
8.
Magn Reson Imaging ; 108: 116-128, 2024 May.
Article in English | MEDLINE | ID: mdl-38325727

ABSTRACT

This work aims to improve the efficiency of multi-coil data compression and to recover the compressed image reversibly, increasing the possibility of applying the proposed method to medical scenarios. A deep learning algorithm is employed for MR coil compression: the approach introduces a variable augmentation network for invertible coil compression (VAN-ICC), which exploits the inherent reversibility of normalizing flow-based models. By applying variable augmentation technology to image/k-space variables from multiple coils, VAN-ICC trains the invertible network by finding an invertible and bijective function that maps the original data to the compressed counterpart and vice versa. Experiments conducted on both fully-sampled and under-sampled data verified the effectiveness and flexibility of VAN-ICC. Quantitative and qualitative comparisons with traditional non-deep-learning approaches demonstrated that VAN-ICC achieves substantially better compression. By exploiting the reversibility of normalizing flow-based models, the method addresses the shortcomings of traditional coil compression, and the variable augmentation technology ensures that the network remains reversible. In short, VAN-ICC offers a competitive advantage over traditional coil compression algorithms.
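The snippet below only demonstrates the exact invertibility that normalizing-flow couplings provide, which is the property VAN-ICC builds on; the additive coupling, channel split, and toy transform are illustrative, not the paper's network.

```python
import numpy as np

def coupling_forward(x1, x2, t):
    """Additive coupling: y1 = x1, y2 = x2 + t(x1). Exactly invertible."""
    return x1, x2 + t(x1)

def coupling_inverse(y1, y2, t):
    return y1, y2 - t(y1)

t = lambda a: np.tanh(a @ np.full((a.shape[1], a.shape[1]), 0.1))   # toy "learned" transform

coils = np.random.randn(8, 64)             # 8 coil channels, 64 samples each (illustrative)
x1, x2 = coils[:4], coils[4:]              # split channels into two groups
y1, y2 = coupling_forward(x1, x2, t)
r1, r2 = coupling_inverse(y1, y2, t)
assert np.allclose(np.vstack([r1, r2]), coils)   # lossless recovery of the original coils
```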


Subjects
Data Compression, Data Compression/methods, Magnetic Resonance Imaging/methods, Algorithms, Computer-Assisted Image Processing/methods
9.
IEEE Trans Image Process ; 33: 408-422, 2024.
Article in English | MEDLINE | ID: mdl-38133987

ABSTRACT

The accelerated proliferation of visual content and the rapid development of machine vision technologies bring significant challenges in delivering visual data on a gigantic scale, which shall be effectively represented to satisfy both human and machine requirements. In this work, we investigate how hierarchical representations derived from the advanced generative prior facilitate constructing an efficient scalable coding paradigm for human-machine collaborative vision. Our key insight is that by exploiting the StyleGAN prior, we can learn three-layered representations encoding hierarchical semantics, which are elaborately designed into the basic, middle, and enhanced layers, supporting machine intelligence and human visual perception in a progressive fashion. With the aim of achieving efficient compression, we propose the layer-wise scalable entropy transformer to reduce the redundancy between layers. Based on the multi-task scalable rate-distortion objective, the proposed scheme is jointly optimized to achieve optimal machine analysis performance, human perception experience, and compression ratio. We validate the proposed paradigm's feasibility in face image compression. Extensive qualitative and quantitative experimental results demonstrate the superiority of the proposed paradigm over the latest compression standard Versatile Video Coding (VVC) in terms of both machine analysis as well as human perception at extremely low bitrates (< 0.01 bpp), offering new insights for human-machine collaborative compression.


Subjects
Data Compression, Humans, Data Compression/methods, Computer-Assisted Signal Processing, Algorithms, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Video Recording/methods
10.
BMC Bioinformatics ; 24(1): 437, 2023 Nov 21.
Article in English | MEDLINE | ID: mdl-37990290

ABSTRACT

BACKGROUND: Because of the rapid generation of data, the study of compression algorithms to reduce storage and transmission costs is important to bioinformaticians. Much of the focus has been on sequence data, including both genomes and protein amino acid sequences stored in FASTA files. Current standard practice is to use an ordinary lossless compressor such as gzip on a sequential list of atomic coordinates, but this approach expends bits on saving an arbitrary ordering of atoms, and it also prevents reordering the atoms for compressibility. The standard MMTF and BCIF file formats extend this approach with custom encoding of the coordinates. However, the brand new Foldcomp tool introduces a new paradigm of compressing local angles, to great effect. In this article, we explore a different paradigm, showing for the first time that image-based compression using global angles can also significantly improve compression ratios. To this end, we implement a prototype compressor 'PIC', specialized for point clouds of atom coordinates contained in PDB and mmCIF files. PIC maps the 3D data to a 2D 8-bit greyscale image and leverages the well developed PNG image compressor to minimize the size of the resulting image, forming the compressed file. RESULTS: PIC outperforms gzip in terms of compression ratio on proteins over 20,000 atoms in size, with a savings over gzip of up to 37.4% on the proteins compressed. In addition, PIC's compression ratio increases with protein size. CONCLUSION: Image-centric compression as demonstrated by our prototype PIC provides a potential means of constructing 3D structure-aware protein compression software, though future work would be necessary to make this practical.
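A minimal sketch of the image-centric idea: quantize atom coordinates, lay the bytes out as a 2D 8-bit greyscale image, and let PNG do the entropy coding. PIC's actual mapping and precision handling differ; the 16-bit quantization and image width here are assumptions.

```python
import numpy as np
from PIL import Image

def coords_to_png(coords: np.ndarray, path: str, width: int = 256):
    """coords: (n_atoms, 3) float array of x, y, z coordinates."""
    lo, hi = coords.min(), coords.max()
    q = np.round((coords - lo) / (hi - lo) * 65535).astype(np.uint16)   # 16-bit quantization
    bytes_ = q.view(np.uint8).ravel()                                   # two bytes per value
    pad = (-len(bytes_)) % width
    img = np.concatenate([bytes_, np.zeros(pad, np.uint8)]).reshape(-1, width)
    Image.fromarray(img, mode="L").save(path, optimize=True)            # PNG does the entropy coding
    return lo, hi, coords.shape                                         # needed to invert the mapping

coords_to_png(np.random.rand(30000, 3) * 120.0, "protein_sketch.png")
```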


Subjects
Data Compression, Data Compression/methods, Algorithms, Software, Genome
11.
Funct Integr Genomics ; 23(4): 333, 2023 Nov 11.
Article in English | MEDLINE | ID: mdl-37950100

ABSTRACT

Hospitals and medical laboratories create a tremendous amount of genome sequence data every day for use in research, surgery, and illness diagnosis. Compression is therefore essential to make the storage, monitoring, and distribution of all these data manageable. A novel data compression technique is required to reduce the time as well as the cost of storage, transmission, and data processing. General-purpose compression techniques do not perform well on these data due to their special features: a large number of repeats (tandem and palindromic), small alphabets, high similarity, and specific file formats. In this study, we provide a method for compressing FastQ files that uses a reference genome as a backup without sacrificing data quality. FastQ files are initially split into three streams (identifier, sequence, and quality score), each of which receives its own compression technique. A fast and lightweight mapping mechanism is also presented to effectively compress the sequence stream. Experiments show that both the compression ratio and the compression/decompression time of NGS data compressed using RBFQC are superior to those achieved by other state-of-the-art genome compression methods. In comparison to GZIP, RBFQC may achieve a compression ratio of 80-140% for fixed-length datasets and 80-125% for variable-length datasets. Compared to domain-specific referential FastQ compression techniques, RBFQC achieves a 10-25% improvement in total compression and decompression speed.
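A minimal sketch of the stream-splitting step described above: FASTQ records are separated into identifier, sequence, and quality-score streams and each stream is compressed independently. zlib stands in for RBFQC's per-stream, reference-based codecs.

```python
import zlib

def split_and_compress(fastq_text: str):
    ids, seqs, quals = [], [], []
    lines = fastq_text.strip().splitlines()
    for i in range(0, len(lines), 4):                  # FASTQ record = 4 lines
        ids.append(lines[i]); seqs.append(lines[i + 1]); quals.append(lines[i + 3])
    streams = {"id": "\n".join(ids), "seq": "\n".join(seqs), "qual": "\n".join(quals)}
    return {name: zlib.compress(s.encode(), 9) for name, s in streams.items()}

record = "@read1\nACGTACGTACGT\n+\nFFFFFFFF:FFF\n"
compressed = split_and_compress(record * 1000)
print({k: len(v) for k, v in compressed.items()})      # per-stream compressed sizes
```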


Subjects
Data Compression, Data Compression/methods, Algorithms, Software, High-Throughput Nucleotide Sequencing/methods, Genome, DNA Sequence Analysis/methods
12.
BMC Bioinformatics ; 24(1): 369, 2023 Sep 30.
Article in English | MEDLINE | ID: mdl-37777730

ABSTRACT

BACKGROUND: For decades, a large number of researchers have devoted themselves to accelerating genome sequencing and reducing its cost, and they have made great strides in both areas, making it easier to study and analyze genome data. However, how to efficiently store and transmit the vast amount of genome data generated by high-throughput sequencing technologies has become a challenge for data compression researchers. Therefore, research on genome data compression algorithms that facilitate the efficient representation of genome data has gradually attracted their attention. Meanwhile, since current computing devices have multiple cores, making full use of them to improve the efficiency of parallel processing is also an important consideration when designing genome compression algorithms. RESULTS: We proposed an algorithm (LMSRGC) based on reference genome sequences, which uses the suffix array (SA) and the longest common prefix (LCP) array to find the longest matched substrings (LMS) for the compression of genome data in FASTA format. The proposed algorithm utilizes the characteristics of the SA and the LCP array to select all appropriate LMSs between the genome sequence to be compressed and the reference genome sequence and then utilizes LMSs to compress the target genome sequence. To speed up the operation of the algorithm, we use GPUs to parallelize the construction of the SA, while using multiple threads to parallelize the creation of the LCP array and the filtering of LMSs. CONCLUSIONS: Experiment results demonstrate that our algorithm is competitive with the current state-of-the-art algorithms in compression ratio and compression time.
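For intuition only, the sketch below finds the longest matched substring of a target against a reference with a sorted suffix list and binary search; LMSRGC builds a proper suffix array and LCP array (on the GPU), whereas this naive version uses O(n^2) memory.

```python
import bisect

def longest_match(target: str, pos: int, suffixes):
    """Longest prefix of target[pos:] that occurs in the reference.
    `suffixes` is a sorted list of (suffix, start) pairs of the reference."""
    q = target[pos:]
    k = bisect.bisect_left(suffixes, (q,))
    best = (0, 0)                                      # (ref_start, length)
    for cand in (k - 1, k):                            # check both neighbours in sorted order
        if 0 <= cand < len(suffixes):
            s, start = suffixes[cand]
            l = 0
            while l < len(q) and l < len(s) and q[l] == s[l]:
                l += 1
            if l > best[1]:
                best = (start, l)
    return best

ref = "ACGTTGACGGATCCAGTACGTTGA"
suffixes = sorted((ref[i:], i) for i in range(len(ref)))   # naive suffix "array"
print(longest_match("GGATCCAGTAxxACGTT", 0, suffixes))     # -> (8, 10): copy from ref[8:18]
```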


Subjects
Data Compression, Data Compression/methods, DNA Sequence Analysis/methods, Algorithms, Genome, Software, High-Throughput Nucleotide Sequencing/methods
13.
Database (Oxford) ; 2023, 2023 08 11.
Article in English | MEDLINE | ID: mdl-37566631

ABSTRACT

The advancement of genetic sequencing techniques has led to the production of a large volume of data. The extraction of genetic material from a sample is one of the early steps of a metagenomic study. As the processes evolved, the analysis of the sequenced data allowed the discovery of etiological agents and, by corollary, the diagnosis of infections. One of the biggest challenges of the technique is the huge volume of data generated with each new technology developed. The aim here is to introduce an algorithm that reduces the data volume, allowing faster DNA matching against reference databases. Using techniques such as lossy compression and a substitution matrix, it is possible to match nucleotide sequences without losing the subject. This lossy compression exploits the nature of DNA mutations, insertions, and deletions, and the possibility that different sequences represent the same subject. The algorithm can reduce the overall size of the database to 15% of the original size and, depending on parameters, to as little as 5% of the original size. Although the output is equivalent to that of other platforms, the matching algorithm is more sensitive because it ignores transitions and transversions, providing a faster way to obtain diagnostic results. The first experiments show a roughly tenfold speed increase over BLAST while maintaining high sensitivity. This performance gain could be extended by combining other techniques already used in other studies, such as hash tables. Database URL: https://github.com/ghc4/metagens.
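A toy version of the lossy reduction hinted at above: collapsing bases into purine/pyrimidine classes so that transition mutations (A<->G, C<->T) no longer break a match. The tool's actual substitution matrix may differ, and handling of transversions is not shown.

```python
REDUCE = str.maketrans("AGCT", "RRYY")   # purines -> R, pyrimidines -> Y

def reduce_seq(seq: str) -> str:
    return seq.upper().translate(REDUCE)

reference = "ACGTTGCAGGATC"
mutated   = "GCATTGTAAGATC"              # differs only by transition mutations
print(reduce_seq(reference))              # RYRYYRYRRRRYY
print(reduce_seq(mutated))                # same reduced string -> still matches
print(reduce_seq(reference) == reduce_seq(mutated))
```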


Subjects
Data Compression, Data Compression/methods, High-Throughput Nucleotide Sequencing/methods, Algorithms, DNA Sequence Analysis/methods, DNA, Software
14.
Sensors (Basel) ; 23(12)2023 Jun 15.
Article in English | MEDLINE | ID: mdl-37420788

ABSTRACT

This article describes an empirical exploration of the effect of information loss affecting compressed representations of dynamic point clouds on the subjective quality of the reconstructed point clouds. The study involved compressing a set of test dynamic point clouds using the MPEG V-PCC (Video-based Point Cloud Compression) codec at 5 different levels of compression and applying simulated packet losses with three packet loss rates (0.5%, 1% and 2%) to the V-PCC sub-bitstreams prior to decoding and reconstructing the dynamic point clouds. The quality of the recovered dynamic point clouds was then assessed by human observers in experiments conducted at two research laboratories in Croatia and Portugal, to collect MOS (Mean Opinion Score) values. These scores were subjected to a set of statistical analyses to measure the degree of correlation of the data from the two laboratories, as well as the degree of correlation between the MOS values and a selection of objective quality measures, while taking into account compression level and packet loss rates. The objective quality measures considered, all of the full-reference type, included point-cloud-specific measures as well as others adapted from image and video quality assessment. Among the image-based quality measures, FSIM (Feature Similarity index), MSE (Mean Squared Error), and SSIM (Structural Similarity index) yielded the highest correlation with subjective scores in both laboratories, while PCQM (Point Cloud Quality Metric) showed the highest correlation among all point-cloud-specific objective measures. The study showed that even 0.5% packet loss rates reduce the decoded point clouds' subjective quality by more than 1 to 1.5 MOS scale units, pointing out the need to adequately protect the bitstreams against losses. The results also showed that degradations in the V-PCC occupancy and geometry sub-bitstreams have a significantly higher (negative) impact on decoded point cloud subjective quality than degradations of the attribute sub-bitstream.
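An assumed, minimal example of the correlation analysis described: Pearson (PLCC) and Spearman (SROCC) coefficients between MOS values and an objective metric, computed with SciPy. The numbers are made up.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

mos  = np.array([4.2, 3.8, 3.1, 2.4, 1.9, 4.0, 2.9, 2.1])            # made-up subjective scores
ssim = np.array([0.97, 0.95, 0.90, 0.83, 0.78, 0.96, 0.88, 0.80])    # made-up objective scores

plcc, _ = pearsonr(ssim, mos)     # linear correlation
srocc, _ = spearmanr(ssim, mos)   # rank-order correlation
print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}")
```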


Subjects
Data Compression, Humans, Data Compression/methods, Croatia, Portugal
15.
Sensors (Basel) ; 23(12)2023 Jun 17.
Article in English | MEDLINE | ID: mdl-37420828

ABSTRACT

Signal transmission plays an important role in the daily operation of structural health monitoring (SHM) systems. In wireless sensor networks, transmission loss often occurs and threatens reliable data delivery. The massive amount of monitoring data also leads to high signal transmission and storage costs throughout the system's service life. Compressive Sensing (CS) provides a novel perspective on alleviating these problems. Based on the sparsity of vibration signals in the frequency domain, CS can reconstruct a nearly complete signal from just a few measurements. This can improve robustness to data loss while facilitating data compression to reduce transmission demands. Extended from CS methods, distributed compressive sensing (DCS) can exploit the correlation across multiple measurement vectors (MMV) to jointly recover multi-channel signals with similar sparse patterns, which can effectively enhance the reconstruction quality. In this paper, a comprehensive DCS framework for wireless signal transmission in SHM is constructed, incorporating the processes of data compression and transmission loss together. Unlike the basic DCS formulation, the proposed framework not only activates the inter-correlation among channels but also provides flexibility and independence to single-channel transmission. To promote signal sparsity, a hierarchical Bayesian model using Laplace priors is built and further improved as the fast iterative DCS-Laplace algorithm for large-scale reconstruction tasks. Vibration signals (e.g., dynamic displacements and accelerations) acquired from real-life SHM systems are used to simulate the whole process of wireless transmission and test the algorithm's performance. The results demonstrate that (1) DCS-Laplace is an adaptive algorithm that can actively adapt to signals with various sparsity by adjusting the penalty term to achieve optimal performance; (2) compared with CS methods, DCS methods can effectively improve the reconstruction quality of multi-channel signals; (3) the Laplace method has advantages over the OMP method in terms of reconstruction performance and applicability, making it a better choice for SHM wireless signal transmission.
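The paper's DCS-Laplace algorithm is Bayesian; as a simpler stand-in, the sketch below uses simultaneous OMP to show the core MMV idea of selecting atoms jointly across channels that share a sparse support. Problem sizes and sparsity level are assumptions.

```python
import numpy as np

def somp(Phi, Y, n_atoms):
    """Simultaneous OMP: jointly recover multi-channel signals X (N x C)
    from measurements Y = Phi @ X (M x C) that share a sparse support."""
    M, N = Phi.shape
    residual, support = Y.copy(), []
    for _ in range(n_atoms):
        scores = np.abs(Phi.T @ residual).sum(axis=1)      # joint score across channels
        scores[support] = -np.inf
        support.append(int(np.argmax(scores)))
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, Y, rcond=None)     # least-squares fit on current support
        residual = Y - sub @ coef
    X = np.zeros((N, Y.shape[1]))
    X[support] = coef
    return X

rng = np.random.default_rng(1)
N, M, C, K = 256, 64, 4, 8                    # signal length, measurements, channels, sparsity
support = rng.choice(N, K, replace=False)
X_true = np.zeros((N, C)); X_true[support] = rng.standard_normal((K, C))
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
X_hat = somp(Phi, Phi @ X_true, K)
print("recovery error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```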


Subjects
Data Compression, Humans, Data Compression/methods, Computer-Assisted Signal Processing, Bayes Theorem, Algorithms, Electrocardiography/methods, Cardiac Arrhythmias
16.
Sensors (Basel) ; 23(12)2023 Jun 19.
Article in English | MEDLINE | ID: mdl-37420864

ABSTRACT

Fast compressed-sensing magnetic resonance imaging enhances diffusion imaging, and Wasserstein Generative Adversarial Networks (WGANs) leverage image-based information. The article presents a novel G-guided generative multilevel network, which leverages diffusion weighted imaging (DWI) input data with constrained sampling. The present study aims to investigate two primary concerns pertaining to MRI image reconstruction, namely image resolution and reconstruction duration. The implementation of simultaneous k-q space sampling has been found to enhance the performance of Rotating Single-Shot Acquisition (RoSA) without necessitating any hardware modifications. Diffusion weighted imaging is capable of decreasing the duration of testing by minimizing the amount of input data required. The synchronization of diffusion directions within PROPELLER blades is achieved through compressed k-space synchronization. The grids utilized in DW-MRI are represented by minimal spanning trees. The use of conjugate symmetry in sensing and the partial Fourier approach has been observed to enhance the efficacy of data acquisition compared with unaltered k-space sampling systems. The image's sharpness, edge readings, and contrast have been enhanced. These achievements have been confirmed by numerous metrics, including PSNR and TRE. It is desirable to enhance image quality without necessitating any modifications to the hardware.


Subjects
Data Compression, Diffusion Magnetic Resonance Imaging, Diffusion Magnetic Resonance Imaging/methods, Algorithms, Data Compression/methods, Magnetic Resonance Imaging/methods, Computer-Assisted Image Interpretation/methods, Computer-Assisted Image Processing/methods
17.
Ultrasonics ; 134: 107063, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37300907

ABSTRACT

To enhance the effectiveness and safety of focused ultrasound (FUS) therapy, ultrasound image-based guidance and treatment monitoring are crucial. However, the use of FUS transducers for both therapy and imaging is impractical due to their low spatial resolution, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). To address this issue, we propose a new method that significantly improves the quality of images obtained by a FUS transducer. The proposed method employs coded excitation to enhance SNR and Wiener deconvolution to solve the problem of low axial resolution resulting from the narrow spectral bandwidth of FUS transducers. Specifically, the method eliminates the impulse response of a FUS transducer from received ultrasound signals using Wiener deconvolution, and pulse compression is performed using a mismatched filter. Simulation and commercial phantom experiments confirmed that the proposed method significantly improves the quality of images acquired by the FUS transducer. The -6 dB axial resolution was improved from 1.27 mm to 0.37 mm, which is comparable to the 0.33 mm resolution achieved by the imaging transducer. SNR and CNR also increased from 16.5 dB and 0.69 to 29.1 dB and 3.03, respectively, which are similar to the values obtained with the imaging transducer (27.8 dB and 3.16). Based on these results, we believe that the proposed method has great potential to enhance the clinical utility of FUS transducers in ultrasound image-guided therapy.
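A 1-D NumPy sketch of the two signal-processing steps named above: Wiener deconvolution to remove an (assumed known) transducer impulse response, followed by pulse compression of a chirp coded excitation. A plain matched filter stands in for the paper's mismatched filter, and the SNR constant is illustrative.

```python
import numpy as np

fs, dur = 40e6, 10e-6
t = np.arange(0, dur, 1 / fs)
chirp = np.sin(2 * np.pi * (2e6 * t + 0.5 * (4e6 - 2e6) / dur * t**2))  # coded excitation (2-4 MHz chirp)

# Narrow-band transducer impulse response (assumed known from calibration)
imp = np.exp(-((t - 1e-6) ** 2) / (2 * (0.2e-6) ** 2)) * np.sin(2 * np.pi * 3e6 * t)

echo = np.convolve(chirp, imp)[: len(t)] + 0.01 * np.random.randn(len(t))  # received signal

# Wiener deconvolution: undo the transducer response with noise regularization
H = np.fft.rfft(imp, len(t))
snr = 100.0                                               # illustrative SNR constant
wiener = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
deconv = np.fft.irfft(np.fft.rfft(echo) * wiener, len(t))

# Pulse compression (matched filter stands in for the mismatched filter)
compressed = np.correlate(deconv, chirp, mode="same")
print(compressed.shape)
```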


Subjects
Data Compression, Data Compression/methods, Ultrasonography/methods, Signal-to-Noise Ratio, Computer Simulation, Imaging Phantoms, Transducers
18.
Biomed Phys Eng Express ; 9(4)2023 06 14.
Article in English | MEDLINE | ID: mdl-37279702

ABSTRACT

Background. In telecardiology, the acquisition, processing, and communication of bio-signals for clinical purposes occupies large storage and significant bandwidth over a communication channel. Electrocardiogram (ECG) compression with effective reproducibility is highly desired. In the present work, a compression technique for ECG signals with less distortion, using a non-decimated stationary wavelet with a run-length encoding scheme, is proposed. Method. A non-decimated stationary wavelet transform (NSWT) method has been developed to compress the ECG signals. The signal is subdivided into N levels with different thresholding values. The wavelet coefficients with values larger than the threshold are retained and the remaining are suppressed. In the presented technique, the biorthogonal (bior) wavelet is employed, as it improves the compression ratio as well as the percentage root-mean-square difference (PRD) compared with the existing method. After pre-processing, the coefficients are subjected to a Savitzky-Golay filter to remove corrupted signals. The wavelet coefficients are then quantized using dead-zone quantization, which eliminates values that are close to zero. To encode these values, a run-length encoding (RLE) scheme is applied, resulting in compressed ECG signals. Results. The presented methodology has been evaluated on the MITDB arrhythmia database, which contains 4800 ECG fragments from forty-eight clinical records. The proposed technique achieves an average compression ratio of 33.12, PRD of 1.99, NPRD of 2.53, and QS of 16.57, making it a promising approach for various applications. Conclusion. The proposed technique exhibits a high compression ratio and reduces distortion compared to the existing method.
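A minimal NumPy sketch of the dead-zone quantization and run-length encoding stages applied to (already decomposed) wavelet coefficients; the bior SWT decomposition, per-level thresholding, and Savitzky-Golay filtering are omitted, and the step sizes are assumptions.

```python
import numpy as np

def dead_zone_quantize(coeffs, step=0.5, dead_zone=1.0):
    """Zero out near-zero coefficients, then uniformly quantize the rest."""
    return np.where(np.abs(coeffs) < dead_zone, 0, np.round(coeffs / step)).astype(int)

def run_length_encode(values):
    """Encode as (value, run) pairs; long zero runs dominate after the dead zone."""
    out, prev, run = [], values[0], 1
    for v in values[1:]:
        if v == prev:
            run += 1
        else:
            out.append((int(prev), run)); prev, run = v, 1
    out.append((int(prev), run))
    return out

coeffs = np.array([0.1, -0.2, 0.05, 3.4, 3.6, 0.0, 0.0, -2.7, 0.1, 0.02])  # stand-in for SWT detail coefficients
rle = run_length_encode(dead_zone_quantize(coeffs))
print(rle)    # [(0, 3), (7, 2), (0, 2), (-5, 1), (0, 2)]
```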


Subjects
Data Compression, Wavelet Analysis, Algorithms, Data Compression/methods, Computer-Assisted Signal Processing, Electrocardiography/methods
19.
Bioinformatics ; 39(5)2023 05 04.
Article in English | MEDLINE | ID: mdl-37129540

ABSTRACT

SUMMARY: We describe a compression scheme for BUS files and an implementation of the algorithm in the BUStools software. Our compression algorithm yields smaller file sizes than gzip, at significantly faster compression and decompression speeds. We evaluated our algorithm on 533 BUS files from scRNA-seq experiments with a total size of 1TB. Our compression is 2.2× faster than the fastest gzip option, 35% slower than the fastest zstd option, and results in 1.5× smaller files than both methods. This amounts to an 8.3× reduction in file size, resulting in a compressed size of 122GB for the dataset. AVAILABILITY AND IMPLEMENTATION: A complete description of the format is available at https://github.com/BUStools/BUSZ-format and an implementation at https://github.com/BUStools/bustools. The code to reproduce the results of this article is available at https://github.com/pmelsted/BUSZ_paper.
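An illustrative benchmark in the spirit of the comparison above, measuring ratio and speed of zlib/gzip versus zstd on an arbitrary binary file via the third-party `zstandard` package; it says nothing about the BUSZ format itself, and `records.bus` is a hypothetical input path.

```python
import time, zlib
import zstandard  # pip install zstandard

def bench(name, compress, decompress, data):
    t0 = time.perf_counter(); blob = compress(data); t1 = time.perf_counter()
    decompress(blob); t2 = time.perf_counter()
    print(f"{name:>6}: ratio {len(data)/len(blob):5.2f}  "
          f"comp {t1-t0:.3f}s  decomp {t2-t1:.3f}s")

data = open("records.bus", "rb").read()        # any binary file to test on
bench("gzip", lambda d: zlib.compress(d, 6), zlib.decompress, data)
cctx, dctx = zstandard.ZstdCompressor(level=3), zstandard.ZstdDecompressor()
bench("zstd", cctx.compress, dctx.decompress, data)
```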


Subjects
Data Compression, High-Throughput Nucleotide Sequencing, High-Throughput Nucleotide Sequencing/methods, Algorithms, Software, Data Compression/methods, Exome Sequencing
20.
Med Phys ; 50(12): 7700-7713, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37219814

ABSTRACT

BACKGROUND: Diffusion magnetic resonance imaging (dMRI) provides a powerful tool to non-invasively investigate neural structures in the living human brain. Nevertheless, its reconstruction performance on neural structures relies on the number of diffusion gradients in the q-space. High-angular (HA) dMRI requires a long scan time, limiting its use in clinical practice, whereas directly reducing the number of diffusion gradients would lead to the underestimation of neural structures. PURPOSE: We propose a deep compressive sensing-based q-space learning (DCS-qL) approach to estimate HA dMRI from low-angular dMRI. METHODS: In DCS-qL, we design the deep network architecture by unfolding the proximal gradient descent procedure that addresses the compressive sense problem. In addition, we exploit a lifting scheme to design a network structure with reversible transform properties. For implementation, we apply a self-supervised regression to enhance the signal-to-noise ratio of diffusion data. Then, we utilize a semantic information-guided patch-based mapping strategy for feature extraction, which introduces multiple network branches to handle patches with different tissue labels. RESULTS: Experimental results show that the proposed approach can yield a promising performance on the tasks of reconstructed HA dMRI images, microstructural indices of neurite orientation dispersion and density imaging, fiber orientation distribution, and fiber bundle estimation. CONCLUSIONS: The proposed method achieves more accurate neural structures than competing approaches.
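DCS-qL unfolds a proximal gradient procedure into network layers; the sketch below shows only the classical iteration being unfolded (ISTA with a soft-threshold proximal step) on a generic sparse-recovery problem with illustrative sizes, not the learned network, lifting scheme, or q-space specifics.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam=0.05, n_iter=200):
    """Proximal gradient descent for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    Each iteration is the computation that an unfolded network turns into one layer."""
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)      # illustrative sensing matrix
x_true = np.zeros(200); x_true[rng.choice(200, 6, replace=False)] = rng.standard_normal(6)
x_hat = ista(A, A @ x_true)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```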


Subjects
Algorithms, Data Compression, Humans, Diffusion Magnetic Resonance Imaging/methods, Data Compression/methods, Brain/diagnostic imaging, Signal-to-Noise Ratio, Computer-Assisted Image Processing/methods